125 research outputs found

    An effect of simplifying magic rules for answering recursive queries in deductive databases

    The basic magic-sets transformation algorithm for rewriting logical rules in deductive databases is clear and straightforward. However, the algorithm generates many more rules for answering a query than the original program contains, so it is useful to simplify the generated rules before they are evaluated. This paper studies the effect of such simplification on computing time and concludes that the improvement achieved by simplification is quite significant.
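    The goal-directed effect that magic-set rewriting aims for can be sketched in a few lines of Python (a toy ancestor program over hypothetical facts, not the paper's transformation): restricting bottom-up evaluation to the query constant derives far fewer facts than computing the full transitive closure.

```python
# Toy ancestor program: ancestor(X,Y) :- parent(X,Y).
#                       ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
PARENT = {("ann", "bob"), ("bob", "carl"), ("carl", "dee"),
          ("eve", "fay"), ("fay", "gus")}   # hypothetical EDB facts

def naive_ancestors():
    """Full bottom-up closure: derives ancestor facts for every person."""
    anc = set(PARENT)
    while True:
        new = {(x, z) for (x, y) in anc for (y2, z) in PARENT if y == y2} - anc
        if not new:
            return anc
        anc |= new

def magic_ancestors(seed):
    """Goal-directed evaluation for the query ancestor(seed, X): only explore
    descendants reachable from the query constant, mimicking the restriction
    a magic predicate imposes on bottom-up evaluation."""
    anc, frontier = set(), {seed}
    while frontier:
        step = {(seed, c) for p in frontier for (p2, c) in PARENT if p == p2} - anc
        anc |= step
        frontier = {c for (_, c) in step}
    return anc

full = naive_ancestors()
focused = magic_ancestors("ann")
print(len(full), len(focused))   # the focused evaluation derives far fewer facts
```

    The simplification the abstract discusses operates on the rewritten rules themselves; the sketch only illustrates why restricting evaluation to query-relevant facts pays off.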

    A method of estimating aborted transaction in the database concurrency control system

    Transactions may be aborted when they are unable to obtain a lock on a required data item. Estimating the proportion of transactions that abort is a key issue in modelling such a system, since it affects performance measures of interest such as the average response time and the throughput capacity of the system. This paper presents a method for estimating the proportion of aborted transactions and compares it with the method given by Mitrani et al.
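    As a rough illustration of the quantity being estimated (a deliberately simplified lock-conflict model with made-up parameters, not the analysis of this paper or of Mitrani et al.), a Monte Carlo sketch:

```python
import random

def estimate_abort_fraction(n_items=100, n_concurrent=10, locks_per_txn=3,
                            trials=20000, seed=1):
    """Estimate the fraction of transactions that abort in a simple model:
    a new transaction aborts if any item it needs is already locked by one
    of the n_concurrent transactions currently holding locks."""
    rng = random.Random(seed)
    aborted = 0
    for _ in range(trials):
        held = set()
        for _ in range(n_concurrent):           # locks held by running txns
            held.update(rng.sample(range(n_items), locks_per_txn))
        wanted = rng.sample(range(n_items), locks_per_txn)
        if any(item in held for item in wanted):
            aborted += 1
    return aborted / trials

print(round(estimate_abort_fraction(), 3))
```

    Increasing the concurrency level or the locks per transaction drives the abort fraction up, which is exactly the dependence an analytical estimate must capture.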

    Soft set approach for clustering web user transactions

    Rough set theory provides a methodology for data analysis based on the approximation of information systems. It revolves around the notion of discernibility, i.e. the ability to distinguish between objects based on their attribute values, and allows inferring data dependencies that are useful in feature selection and decision-model construction. Since it has been proven that every rough set is a soft set, we present, within the context of soft set theory, a soft-set-based framework for partition attribute selection. The paper unifies existing work in this direction and introduces the concept of the maximum attribute relative to determine and rank the attributes in a multi-valued information system. Experimental results demonstrate the potential of the proposed technique to discover attribute subsets, leading to partition selection models that achieve better coverage and lower computational time than the baseline techniques.
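    The basic representation involved can be sketched as follows (a hypothetical toy information system, and a deliberately crude partition-granularity ranking rather than the paper's maximum-attribute-relative technique): each attribute of a multi-valued information system induces a soft set, i.e. a map from attribute values to the sets of objects holding them.

```python
web_sessions = {            # hypothetical multi-valued information system
    "u1": {"browser": "ff",  "entry": "home",  "buys": "yes"},
    "u2": {"browser": "ff",  "entry": "promo", "buys": "no"},
    "u3": {"browser": "chr", "entry": "home",  "buys": "yes"},
    "u4": {"browser": "chr", "entry": "promo", "buys": "maybe"},
}

def soft_set(table, attribute):
    """Soft set of `attribute`: each value maps to the set of objects holding it."""
    mapping = {}
    for obj, row in table.items():
        mapping.setdefault(row[attribute], set()).add(obj)
    return mapping

def partition_rank(table):
    """Rank attributes by how many equivalence classes they induce -- a coarse
    stand-in for a real partition-attribute-selection criterion."""
    attrs = next(iter(table.values())).keys()
    return sorted(attrs, key=lambda a: len(soft_set(table, a)), reverse=True)

print(soft_set(web_sessions, "entry"))
print(partition_rank(web_sessions))
```

    A real criterion such as the maximum attribute relative would rank attributes by more than class count, but the soft-set tabular view above is the common starting point.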

    Efficient Access of Replicated Data in Distributed Database Systems

    Replication is a useful technique for distributed database systems, where a data object may be accessed (i.e., read and written) from multiple locations, whether across a local area network or geographically distributed worldwide. The technique provides high availability, fault tolerance, and enhanced performance. This research addresses the performance of data replication protocols in terms of data availability and communication costs. Specifically, this thesis presents a new protocol, the Three Dimensional Grid Structure (TDGS) protocol, to manage data replication in distributed database systems (DDS). The TDGS protocol organises the sites of the DDS into a logical three-dimensional grid structure. The protocol provides high availability for read and write operations with limited fault tolerance at low communication cost: a read operation needs only two data copies, while a write operation requires a minimal number of copies. In comparison to other protocols, TDGS requires lower communication cost per operation while providing higher data availability. A system for building reliable computing over TDGS, the TDGS Remote Procedure (TDGS-RP) system, is also described in this research. The system combines replication and transaction techniques and embeds them into the TDGS-RP system, describing the models for replicas, TDGS-RP, and transactions, together with the algorithms for managing transactions and replicas.
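    The availability trade-off the abstract refers to can be illustrated with generic quorum arithmetic (plain independent-failure probability, not the TDGS structure itself): an operation that needs only two live copies is available far more often than one that needs a majority of, or all, the copies.

```python
from math import comb

def availability(n, p, quorum):
    """Probability that at least `quorum` of n independent replicas
    (each up with probability p) are available."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(quorum, n + 1))

p, n = 0.9, 9                            # hypothetical per-replica availability
read_two = availability(n, p, 2)         # an operation needing any 2 live copies
write_all = p ** n                       # ROWA-style write: all copies live
write_majority = availability(n, p, n // 2 + 1)
print(round(read_two, 6), round(write_all, 4), round(write_majority, 4))
```

    The gap between the three numbers is why limiting reads to two copies, as TDGS does, buys both availability and low communication cost.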

    Soft set approach for categorical data clustering and maximal association rules mining

    Recent advances in information technology have led to significant changes in today's world; both generating and collecting data have been increasing rapidly. This explosive growth in stored and transient data has generated an urgent need for new techniques that can intelligently assist us in transforming the vast amounts of data into useful information and knowledge. Classification is one form of data analysis in data mining that can be used to extract models describing important data classes. Researchers have proposed many classification methods, but each technique typically suits some problems better than others do; thus, there is no universal data-mining method. In 1999, Molodtsov initiated the concept of soft set theory as a mathematical tool for dealing with uncertainties. Soft set theory has a rich potential for applications in several directions, yet its application to data classification has not been widely studied; the few existing soft-set-based classification methods, although quite successful, still need improvement. This research aims to propose a new approach to data classification based on soft set theory that improves accuracy and efficiency, called the Fuzzy Soft Set Classifier (FSSC).
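    The general pattern behind similarity-based fuzzy soft set classification can be sketched as follows (a hypothetical toy example, not the FSSC algorithm itself): summarise each class by the mean membership vector of its training samples, then assign a new sample to the class whose centroid it is most similar to.

```python
train = {  # class -> fuzzy membership vectors over the same three parameters
    "spam": [[0.9, 0.8, 0.1], [0.8, 0.7, 0.2]],
    "ham":  [[0.2, 0.1, 0.9], [0.1, 0.3, 0.8]],
}

def centroid(vectors):
    """Component-wise mean membership of a class's training samples."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def similarity(u, v):
    """A common fuzzy-set similarity: 1 minus the mean absolute difference."""
    return 1 - sum(abs(a - b) for a, b in zip(u, v)) / len(u)

def classify(sample):
    cents = {label: centroid(vs) for label, vs in train.items()}
    return max(cents, key=lambda label: similarity(sample, cents[label]))

print(classify([0.85, 0.75, 0.15]))   # closer to the "spam" centroid
```

    FSSC differs in how the fuzzy soft sets are built and compared, but the centroid-plus-similarity skeleton is the shared structure of this family of classifiers.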

    Enhanced divide-and-conquer algorithm with 2-block policy

    The number of comparisons needed to find the minimum and maximum elements of a set of data determines the performance of such an algorithm. The divide-and-conquer algorithm is the most efficient established algorithm for finding the minimum and maximum elements of a data set of any size. However, its performance can still be improved by reducing the number of comparisons for certain data sets. In this paper a 2-block (2B) policy under the divide-and-conquer technique is proposed to deal with this problem, and the divide-and-conquer algorithm is enhanced on the basis of this policy. It is shown that the proposed algorithm performs on par with the established algorithm when the data size is a power of two, and better when it is not.
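    The comparison count being optimised can be made concrete with the classical pairing scheme (a generic ~3n/2-comparison min-max search, shown as a baseline; not the paper's 2-block policy):

```python
def minmax_pairs(data):
    """Pairwise min-max search: roughly 3n/2 comparisons instead of the 2n-2
    a naive scan needs. Returns (minimum, maximum, comparison count)."""
    comparisons = 0
    if len(data) % 2:                       # odd length: seed with first element
        lo = hi = data[0]
        rest = data[1:]
    else:                                   # even length: seed with first pair
        comparisons += 1
        lo, hi = (data[0], data[1]) if data[0] < data[1] else (data[1], data[0])
        rest = data[2:]
    for i in range(0, len(rest), 2):        # 3 comparisons per remaining pair
        a, b = rest[i], rest[i + 1]
        comparisons += 1
        small, big = (a, b) if a < b else (b, a)
        comparisons += 2
        if small < lo: lo = small
        if big > hi:  hi = big
    return lo, hi, comparisons

print(minmax_pairs([7, 2, 9, 4, 1, 8]))    # → (1, 9, 7)
```

    The 2B policy aims to shave further comparisons off this count for data sizes that are not a power of two.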

    Non-Probabilistic Inverse Fuzzy Model in Time Series Forecasting

    Many models and techniques have been proposed to improve forecasting accuracy using fuzzy time series. However, very few studies have tackled problems that incorporate an inverse fuzzy function into fuzzy time series forecasting. In this paper, we modify the inverse fuzzy function by considering a new factor value in establishing the forecasting model, without any probabilistic approach. The proposed model was evaluated by comparing its performance with inverse and non-inverse fuzzy time series models in forecasting the yearly enrollment data of several universities, such as Alabama University, Universiti Teknologi Malaysia (UTM), and QiongZhou University; the yearly car accidents in Belgium; and the monthly Turkish spot gold price. The results suggest that the proposed model has the potential to improve forecasting accuracy compared to the existing inverse and non-inverse fuzzy time series models, providing better future forecast values through systematic rules. Keywords: fuzzy time series, inverse fuzzy function, non-probabilistic model, non-inverse fuzzy model, future forecast
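    For readers unfamiliar with the baseline being improved upon, a first-order fuzzy time series forecast in the style of Chen's classical method can be sketched as follows (hypothetical enrollment-like data; this is the non-inverse baseline, not the paper's inverse-fuzzy-function model):

```python
series = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807, 16919]

n_intervals = 5                              # partition the data range evenly
lo, hi = min(series), max(series)
width = (hi - lo) / n_intervals
mids = [lo + (i + 0.5) * width for i in range(n_intervals)]

def fuzzify(x):
    """Index of the interval (fuzzy set A_i) containing x."""
    return min(int((x - lo) / width), n_intervals - 1)

# Fuzzy logical relationships A_i -> {A_j} from consecutive observations.
flr = {}
states = [fuzzify(x) for x in series]
for a, b in zip(states, states[1:]):
    flr.setdefault(a, set()).add(b)

def forecast_next(x):
    """Defuzzify as the mean midpoint of the successors of x's fuzzy set."""
    succ = flr.get(fuzzify(x), {fuzzify(x)})
    return sum(mids[j] for j in succ) / len(succ)

print(round(forecast_next(series[-1]), 1))
```

    The inverse-fuzzy-function approach replaces this defuzzification step; the fuzzification and relationship-mining stages are common to both families of models.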

    Prediction of Malaysian–Indonesian Oil Production and Consumption Using Fuzzy Time Series Model

    Fuzzy time series has been applied to data prediction in various sectors, such as education, finance and economics, energy, and traffic accidents, and many models have been proposed to improve forecasting accuracy. However, the interval-length adjustment and the out-sample forecast procedure are still open issues in fuzzy time series forecasting, neither of which has been clearly investigated in previous studies. In this paper, a new adjustment of the interval length and the partition number of the data set is proposed, and the determination of the out-sample forecast is also discussed. The yearly oil production (OP) and oil consumption (OC) of Malaysia and Indonesia from 1965 to 2012 are examined to evaluate the performance of fuzzy time series and probabilistic time series models. The results indicate that the fuzzy time series model outperforms probabilistic models, such as regression and exponential smoothing, in terms of forecasting accuracy. The paper thus highlights the effect of the proposed interval length in reducing the forecasting error significantly, as well as the main differences between fuzzy and probabilistic time series models. Keywords: fuzzy time series; index of linguistic; oil production–consumption; interval-length; forecasting accuracy
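    For the interval-length issue, one well-known heuristic (an average-based length in the spirit of Huarng's method, shown here on made-up data only as an illustration, not the paper's proposed adjustment) derives the length from the average absolute change of the series:

```python
oil = [630, 645, 690, 702, 745, 730, 760, 801, 836, 850]   # hypothetical series

diffs = [abs(b - a) for a, b in zip(oil, oil[1:])]
half_avg = sum(diffs) / len(diffs) / 2                      # half the mean move

def round_to_base(x):
    """Round down to a 'nice' length: the largest power-of-ten base not
    exceeding x, times the leading digit of x in that base."""
    base = 1
    while base * 10 <= x:
        base *= 10
    return base * int(x / base)

length = round_to_base(half_avg)
n_intervals = -(-(max(oil) - min(oil)) // length)           # ceiling division
print(length, n_intervals)
```

    Tying the interval length to the typical step size keeps consecutive observations from collapsing into one fuzzy set, which is what makes the length choice matter for forecasting error.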